Fast Kronecker Product Kernel Methods via Generalized Vec Trick.

Authors

  • Antti Airola
  • Tapio Pahikkala
Abstract

The Kronecker product kernel provides the standard approach in the kernel methods literature for learning from graph data, where edges are labeled and both start and end vertices have their own feature representations. The methods allow generalization to new edges whose start and end vertices do not appear in the training data, a setting known as zero-shot or zero-data learning. Such a setting occurs in numerous applications, including drug-target interaction prediction, collaborative filtering, and information retrieval. Efficient training algorithms based on the so-called vec trick, which makes use of the special structure of the Kronecker product, are known for the case where the training data form a complete bipartite graph. In this paper, we generalize these results to non-complete training graphs. This allows us to derive a general framework for training Kronecker product kernel methods; as specific examples, we implement Kronecker ridge regression and support vector machine algorithms. Experimental results demonstrate that the proposed approach leads to accurate models while allowing order-of-magnitude improvements in training and prediction time.
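The abstract does not spell out how a sampled Kronecker product kernel matrix can be multiplied with a vector without ever materializing it. The NumPy sketch below illustrates the basic idea for an incomplete training graph (scatter the coefficients onto a grid, apply two ordinary matrix products, gather the observed entries). It is a simplified illustration, not the paper's optimized algorithm, and all names in it (sampled_kron_matvec, rows, cols) are ours.

# Setting: K is an n x n kernel matrix over start vertices (e.g. drugs),
# G is an m x m kernel matrix over end vertices (e.g. targets), and the
# training graph consists of q labeled edges given by index pairs
# (rows[j], cols[j]).  We want
#     p[i] = sum_j K[rows[i], rows[j]] * G[cols[i], cols[j]] * v[j],
# i.e. the pairwise kernel matrix restricted to the observed edges, times v.
import numpy as np

def sampled_kron_matvec(K, G, rows, cols, v):
    """Multiply the edge-sampled Kronecker product kernel matrix with v."""
    n, m = K.shape[0], G.shape[0]
    # 1. Scatter the coefficient vector v onto an n x m grid.
    V = np.zeros((n, m))
    np.add.at(V, (rows, cols), v)
    # 2. Two ordinary matrix products replace the huge Kronecker product.
    T = K @ V @ G.T
    # 3. Gather the entries corresponding to the observed edges.
    return T[rows, cols]

# Quick check against the naive O(q^2) computation on random data.
rng = np.random.default_rng(0)
n, m, q = 5, 4, 7
K = rng.standard_normal((n, n)); K = K @ K.T
G = rng.standard_normal((m, m)); G = G @ G.T
rows = rng.integers(0, n, q)
cols = rng.integers(0, m, q)
v = rng.standard_normal(q)

naive = np.array([sum(K[rows[i], rows[j]] * G[cols[i], cols[j]] * v[j]
                      for j in range(q)) for i in range(q)])
assert np.allclose(sampled_kron_matvec(K, G, rows, cols, v), naive)

The naive computation costs O(q^2) kernel products for q labeled edges, whereas the sketch above needs only two dense matrix products plus O(q) scatter/gather work, which is where the speed-up for iterative solvers such as ridge regression or SVM training comes from.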

Similar articles

Kronecker Square Roots and the Block Vec Matrix

Using the block vec matrix, I give a necessary and sufficient condition for the factorization of a matrix into the Kronecker product of two other matrices. As a consequence, I obtain an elementary algorithmic procedure to decide whether a matrix has a square root for the Kronecker product. Introduction: My statistician colleague, J.E. Chacón, asked me how to decide whether a given real matrix A has a square root...


Matrix Algebra, Class Notes (part 2)

Convention: Let A be a T×m matrix; the notation vec(A) will mean the Tm-element column vector whose first set of T elements is the first column of A, that is, a.1 in the dot notation for columns; the second set of T elements are those of the second column of A, a.2, and so on. Thus A = [a.1, a.2, · · · , a.m] in the dot notation. An immediate consequence of the above convention is the vec of a product...
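The note is cut off before the consequence is stated; presumably it refers to the standard "vec of a product" identity vec(AXB) = (Bᵀ ⊗ A) vec(X). The short NumPy check below verifies the column-stacking convention and that identity (the helper name vec is ours, and column stacking corresponds to Fortran-order reshaping):

import numpy as np

def vec(M):
    # Stack the columns of M on top of each other (column-major order).
    return M.reshape(-1, order="F")

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))
X = rng.standard_normal((2, 4))
B = rng.standard_normal((4, 5))

# vec(A X B) == (B^T kron A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)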


Fast Gradient Computation for Learning with Tensor Product Kernels and Sparse Training Labels

Supervised learning with pair-input data has recently become one of the most intensively studied topics in the pattern recognition literature, and its applications are numerous, including, for example, collaborative filtering, information retrieval, and drug-target interaction prediction. Regularized least-squares (RLS) is a kernel-based learning algorithm that, together with tensor product kernels...


Cartesian Kernel: An Efficient Alternative to the Pairwise Kernel

Pairwise classification has many applications, including network prediction, entity resolution, and collaborative filtering. The pairwise kernel has been proposed for these purposes by several research groups independently and has been used successfully in several fields. In this paper, we propose an efficient alternative, which we call the Cartesian kernel. While the existing pairwise kernel (which...
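The excerpt is truncated before the kernel itself is defined. For orientation, the sketch below contrasts the usual pairwise (Kronecker product) kernel with the Cartesian kernel as it is commonly defined in the literature; these definitions are recalled from memory rather than taken from the excerpt, so treat them as an assumption:

# Both functions evaluate a kernel between two edges (a, b) and (a2, b2),
# given base kernels k1 over start vertices and k2 over end vertices.
def pairwise_kernel(k1, k2, a, b, a2, b2):
    # Kronecker product (pairwise) kernel: product of the base kernels.
    return k1(a, a2) * k2(b, b2)

def cartesian_kernel(k1, k2, a, b, a2, b2):
    # Cartesian kernel: edges interact only when they share a vertex,
    # which makes the resulting pairwise kernel matrix much sparser.
    return k1(a, a2) * (b == b2) + (a == a2) * k2(b, b2)

# Toy usage with a linear base kernel on scalar "features".
k_lin = lambda x, y: x * y
print(pairwise_kernel(k_lin, k_lin, 1.0, 2.0, 3.0, 4.0))   # 3 * 8 = 24
print(cartesian_kernel(k_lin, k_lin, 1.0, 2.0, 3.0, 2.0))  # 3 * 1 + 0 = 3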


Kronecker Factorization for Speeding up Kernel Machines

In kernel machines, such as kernel principal component analysis (KPCA), Gaussian processes (GPs), and support vector machines (SVMs), the computational complexity of finding a solution is O(n^3), where n is the number of training instances. To reduce this expensive computational complexity, we propose using Kronecker factorization, which approximates a positive definite kernel matrix by the Kronecker product...
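The excerpt does not say how the Kronecker factors are obtained. One standard way to approximate a matrix by the Kronecker product of two smaller matrices is the nearest-Kronecker-product approximation via a rank-1 SVD of the Van Loan-Pitsianis rearrangement, sketched below; this is offered only as an illustration of the general technique, not as the cited paper's method, and the function name is ours:

import numpy as np

def nearest_kron_approx(K, p, q):
    """Approximate the (p*q) x (p*q) matrix K by B kron C, with B p x p and
    C q x q, using the rank-1 SVD of the block-wise rearrangement of K."""
    assert K.shape == (p * q, p * q)
    # Rearrange: each row of R is one q x q block of K, flattened row-major.
    R = np.empty((p * p, q * q))
    for i in range(p):
        for j in range(p):
            R[i * p + j] = K[i * q:(i + 1) * q, j * q:(j + 1) * q].ravel()
    # The best rank-1 approximation of R yields the nearest Kronecker factors.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(p, p)
    C = np.sqrt(s[0]) * Vt[0].reshape(q, q)
    return B, C

# Sanity check: an exact Kronecker product is recovered.
rng = np.random.default_rng(2)
B0 = rng.standard_normal((3, 3))
C0 = rng.standard_normal((4, 4))
K = np.kron(B0, C0)
B, C = nearest_kron_approx(K, 3, 4)
assert np.allclose(np.kron(B, C), K)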



Journal:
  • IEEE Transactions on Neural Networks and Learning Systems


Publication year: 2017